Rapid Feature Learning with Stacked Linear Denoisers

Authors

  • Zhixiang Eddie Xu
  • Kilian Q. Weinberger
  • Fei Sha
Abstract

We investigate unsupervised pre-training of deep architectures as feature generators for “shallow” classifiers. Stacked Denoising Autoencoders (SdA) [23], when used as feature pre-processing tools for SVM classification, can lead to significant improvements in accuracy – however, at the price of a substantial increase in computational cost. In this paper we introduce a simple algorithm that mimics the layer-by-layer training of SdAs. In contrast to SdAs, however, our algorithm requires no training through gradient descent, as the parameters can be computed in closed form. It can be implemented in less than 20 lines of MATLAB and reduces the computation time from several hours to mere seconds. We show that our feature transformation reliably and significantly improves the results of SVM classification on all our data sets – often outperforming SdAs, and even deep neural networks on three out of four deep learning benchmarks.
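The closed-form, layer-wise training described in the abstract can be sketched with a marginalized linear denoising layer: instead of sampling corrupted copies of the data, the expected reconstruction under feature-dropout corruption is solved in one linear system per layer. This is a minimal illustration only, assuming dropout-style corruption with probability `p` and a small ridge term for numerical stability; the function names and details are ours, not the authors' exact code.

```python
import numpy as np

def linear_denoiser_layer(X, p=0.5, ridge=1e-5):
    """One marginalized linear denoising layer (sketch).
    X: d x n data matrix (features x examples).
    p: probability that a feature is zeroed out by the corruption.
    Returns the closed-form mapping W and the hidden representation tanh(W @ Xb)."""
    d, n = X.shape
    # Append a constant row so a bias is learned jointly with the weights.
    Xb = np.vstack([X, np.ones((1, n))])
    S = Xb @ Xb.T                       # scatter matrix of the (biased) inputs
    q = np.full(d + 1, 1.0 - p)         # per-feature survival probabilities
    q[-1] = 1.0                         # the bias feature is never corrupted
    # Expected second moments of the corrupted input, marginalized analytically:
    # off-diagonal entries survive with prob q_i * q_j, diagonal with prob q_i.
    Q = S * np.outer(q, q)
    np.fill_diagonal(Q, q * np.diag(S))
    # Expected cross-products between clean and corrupted inputs.
    P = S[:d, :] * q
    # Closed-form reconstruction weights: W = P Q^{-1} (ridge for stability).
    W = np.linalg.solve(Q + ridge * np.eye(d + 1), P.T).T
    return W, np.tanh(W @ Xb)

def stack_layers(X, num_layers=3, p=0.5):
    """Stack denoising layers; each layer's nonlinear output feeds the next."""
    H = X
    for _ in range(num_layers):
        _, H = linear_denoiser_layer(H, p)
    return H
```

Each layer's weights come from a single linear solve over a (d+1) x (d+1) system, which is why stacking a few layers takes seconds rather than the hours that gradient-based SdA training requires.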


Similar resources

Marginalized Stacked Denoising Autoencoders

Stacked Denoising Autoencoders (SDAs) [4] have been used successfully in many learning scenarios and application domains. In short, denoising autoencoders (DAs) train one-layer neural networks to reconstruct input data from partial random corruption. The denoisers are then stacked into deep learning architectures where the weights are fine-tuned with back-propagation. Alternatively, the outputs...


Training Deep Learning based Denoisers without Ground Truth Data

Recent deep learning based denoisers are trained to minimize the mean squared error (MSE) between the output of a network and the ground truth noiseless image in the training data. Thus, it is crucial to have high quality noiseless training data for high performance denoisers. Unfortunately, in some application areas such as medical imaging, it is expensive or even infeasible to acquire such a ...


Approximate Message Passing with A Class of Non-Separable Denoisers

Approximate message passing (AMP) is a class of low-complexity scalable algorithms for solving high-dimensional linear regression tasks where one wishes to recover an unknown signal β0 from noisy, linear measurements y = Aβ0 + w. AMP has the attractive feature that its performance (for example, the mean squared error of its estimates) can be accurately tracked by a simple, scalar iteration refe...
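The AMP recursion alluded to above pairs the linear model y = Aβ0 + w with a denoiser applied at each iteration. A minimal sketch with the classic separable soft-threshold denoiser (not the non-separable class the paper studies); the threshold schedule and the scaling of A are illustrative heuristics, not taken from the paper:

```python
import numpy as np

def soft_threshold(x, tau):
    """Separable denoiser: eta(x) = sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def amp(y, A, num_iters=30, alpha=1.5):
    """Basic AMP for y = A @ beta0 + w with a soft-threshold denoiser.
    The Onsager correction in the residual update is what distinguishes
    AMP from plain iterative thresholding and makes the scalar state
    evolution tracking mentioned above accurate."""
    m, n = A.shape
    beta = np.zeros(n)
    z = y.copy()
    for _ in range(num_iters):
        # Heuristic threshold proportional to the residual's per-entry energy.
        tau = alpha * np.linalg.norm(z) / np.sqrt(m)
        pseudo = beta + A.T @ z              # pseudo-data handed to the denoiser
        beta_new = soft_threshold(pseudo, tau)
        # Onsager term: (1/m) * (number of active coordinates) * previous residual.
        onsager = (z / m) * np.count_nonzero(beta_new)
        z = y - A @ beta_new + onsager
        beta = beta_new
    return beta
```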


Meta-Learning for Stacked Classification

In this paper we describe new experiments with the ensemble learning method Stacking. The central question in these experiments was whether meta-learning methods can be used to accurately predict various aspects of Stacking’s behaviour. The resulting contributions of this paper are twofold: When learning to predict the accuracy of stacked classifiers, we found that the single most important fea...


Application of Stacked Autoencoders to P300 Experimental Data

Deep learning has emerged as a new branch of machine learning in recent years. Some of the related algorithms have been reported to beat state-of-the-art approaches in many applications. The main aim of this paper is to verify one of the deep learning algorithms, specifically a stacked autoencoder, to detect the P300 component. This component, as a specific brain response, is widely used in the...



Journal:
  • CoRR

Volume abs/1105.0972  Issue

Pages  -

Publication date 2011